
OpenAI and Google DeepMind employees warn: Is AI getting out of control?

A new development has sent ripples through the world of technology. Former and current employees of OpenAI and Google DeepMind, two leading companies in artificial intelligence, have published an open letter warning about the potential risks of advanced AI technologies.

The letter, signed by seven former OpenAI employees, four current OpenAI employees, one former Google DeepMind employee, and one current Google DeepMind employee, offers a rare look inside these companies. The signatories express serious concerns about how AI risks are being managed and convey a clear message about the potential dangers of these technologies. Here are the details…


OpenAI and Google DeepMind employees warn about the risks posed by advanced artificial intelligence

The published open letter emphasizes that artificial intelligence technologies can entrench existing inequalities, spread manipulation and misinformation, and, if control over autonomous AI systems is lost, pose existential threats to humanity.

Pointing out that the companies themselves acknowledge these risks, the employees argue that current corporate governance structures are insufficient to address them. They add that AI companies have strong financial incentives to withhold information about the risk levels of their systems and to avoid effective oversight.

The employees also said that, in the absence of government oversight, broad non-disclosure agreements prevent them from speaking publicly about these risks:


The Right to Warn About Advanced Artificial Intelligence

“As people who work or have worked at advanced artificial intelligence companies, we believe in the potential of AI technology to deliver unprecedented benefits to humanity.

At the same time, we are aware of the serious risks posed by these technologies. These risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems, potentially resulting in human extinction.

AI companies themselves acknowledge these risks. Companies like OpenAI, Anthropic, and Google DeepMind are aware of them, as are governments around the world, including the US, the UK, and the 28 countries that signed the Bletchley Declaration, as well as other AI experts.

We are hopeful that, with sufficient guidance from the scientific community, policymakers, and the public, these risks can be adequately mitigated. However, AI companies have strong financial incentives to avoid effective oversight, and we do not believe that bespoke corporate governance structures will be sufficient to change that.

AI companies possess important non-public information about the capabilities and limitations of their systems, the adequacy of protective measures, and the risk levels of different types of harm. But currently, they have weak obligations to share some of this information with governments, and none with civil society. Nor do we believe that all of them can be trusted to voluntarily share all the information.

Without effective government oversight of AI companies, current and former employees are among the very few who can hold them accountable to the public. However, broad non-disclosure agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues.

Existing whistleblower protections are inadequate because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated. Given the history of such cases across the industry, some of us reasonably fear various forms of retaliation. Nor are we the first to encounter or raise these issues.

That’s why we’re calling on advanced AI companies to adhere to the following principles:

1. The company will not enter into or enforce any agreement that prohibits criticism or disparagement of the company for risk-related concerns, nor retaliate for risk-related criticism by hindering any vested economic benefit.

2. The company will establish a verifiably anonymous process through which current and former employees can raise risk-related concerns to the company’s board of directors, to regulators, and to an independent organization with relevant expertise.

3. The company will support a culture of open criticism that permits current and former employees to raise risk-related concerns with the public, the company’s board of directors, regulators, or an independent organization with relevant expertise, provided that trade secrets and other intellectual property interests are appropriately protected.

4. The company will not retaliate against current and former employees who publicly share risk-related confidential information after other processes have failed. We accept that efforts to report risk-related concerns should avoid releasing confidential information unnecessarily. Therefore, once an adequate process exists for anonymously raising concerns to the company’s board of directors, to regulators, and to an independent organization with relevant expertise, we accept that concerns should be raised through that process first. However, as long as such a process does not exist, we believe that current and former employees should retain the freedom to report their concerns to the public.”

Former OpenAI employees Jacob Hilton, Daniel Kokotajlo, William Saunders, Carroll Wainwright, and Daniel Ziegler, as well as former Google DeepMind employee Ramana Kumar and current Google DeepMind employee Neel Nanda, are among the signatories of this stunning open letter. In addition, four current and two former OpenAI employees anonymously signed the open letter.

Although the letter makes clear that artificial intelligence technologies pose serious risks, it does not go into detail about what those risks are. As the employees argue, it would be appropriate for governments to put the necessary oversight in place without delay.

In the meantime, it is vital that companies developing artificial intelligence become more transparent and that governments establish effective oversight mechanisms. How do you interpret this open letter? What risks do you think advanced AI may be hiding?

